Scalable Methods for Computing State Similarity in Deterministic Markov Decision Processes
Authors
Abstract
Similar papers
Methods for Computing State Similarity in Markov Decision Processes
A popular approach to solving large probabilistic systems relies on aggregating states based on a measure of similarity. Many approaches in the literature are heuristic. A number of recent methods rely instead on metrics based on the notion of bisimulation, or behavioral equivalence between states (Givan et al., 2003; Ferns et al., 2004). An integral component of such metrics is the Kantorovich...
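The abstract is cut off at the Kantorovich metric, which measures the distance between the next-state transition distributions of two states under a ground metric on states. As a hedged sketch (the function name `kantorovich` and the toy inputs are illustrative, not from the paper), it can be computed as a small linear program over couplings of the two distributions:

```python
import numpy as np
from scipy.optimize import linprog

def kantorovich(p, q, d):
    """Kantorovich (1-Wasserstein) distance between discrete
    distributions p and q over n states, given an n x n ground
    metric d. Solved as a min-cost flow linear program."""
    n = len(p)
    c = d.reshape(-1)  # cost of moving mass from state i to state j
    rows = []
    # Row-marginal constraints: total flow out of i equals p[i].
    for i in range(n):
        m = np.zeros((n, n)); m[i, :] = 1
        rows.append(m.reshape(-1))
    # Column-marginal constraints: total flow into j equals q[j].
    for j in range(n):
        m = np.zeros((n, n)); m[:, j] = 1
        rows.append(m.reshape(-1))
    res = linprog(c, A_eq=np.array(rows),
                  b_eq=np.concatenate([p, q]), bounds=(0, None))
    return res.fun

# Toy example: all mass must move between two states at distance 1.
p = np.array([1.0, 0.0])
q = np.array([0.0, 1.0])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
print(kantorovich(p, q, d))
```

This O(n^3)-ish LP per state pair is exactly the cost that scalable methods for bisimulation metrics try to avoid or approximate.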
Quasi-Deterministic Partially Observable Markov Decision Processes
We study a subclass of POMDPs, called quasi-deterministic POMDPs (QDET-POMDPs), characterized by deterministic actions and stochastic observations. While this framework does not model the same general problems as POMDPs, it still captures a number of interesting and challenging problems and, in some cases, has interesting properties. By studying the observability available in this subclass, w...
Scalable Verification of Markov Decision Processes
Markov decision processes (MDPs) are useful for modelling concurrent process optimisation problems, but verifying them with numerical methods is often intractable. Existing approximate approaches do not scale well and are limited to memoryless schedulers. Here we present the basis of scalable verification for MDPs, using an O(1) memory representation of history-dependent schedulers. We thus facili...
Incremental methods for computing Markov decision
Partially observable Markov decision processes (POMDPs) allow one to model complex dynamic decision or control problems that include both action outcome uncertainty and imperfect observability. The control problem is formulated as a dynamic optimization problem with a value function combining costs or rewards from multiple steps. In this paper we propose, analyse and test various incremental m...
Interval Methods for Uncertain Markov Decision Processes
In this paper, the average case of Markov decision processes with uncertainty is considered. That is, a controlled Markov set-chain model with a finite state and action space is developed by interval arithmetic analysis, and we find a Pareto optimal policy which maximizes the average expected rewards over all stationary policies under a new partial order. The Pareto optimal policies are...
Journal
Journal title: Proceedings of the AAAI Conference on Artificial Intelligence
Year: 2020
ISSN: 2374-3468, 2159-5399
DOI: 10.1609/aaai.v34i06.6564